Using LLMs Beyond the Chatbot | Stories From The Hackery
Update: 2024-05-08
Description
As we continue our discussion of generative AI on Nashville Software School's podcast, Stories from the Hackery, Founder and CEO John Wark and lead Data instructor Michael Holloway dive into techniques for leveraging large language models (LLMs). They explore the potential of using hosted public LLMs via chatbot interfaces and discuss strategies for embedding LLMs into applications. One such technique is prompt engineering, which involves wrapping the LLM API to tailor user prompts for more effective responses.
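The prompt-engineering-as-a-wrapper idea described above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's API: `call_llm` is a hypothetical stand-in for a real hosted-LLM client, and the template text is invented for the example.

```python
# Minimal sketch of prompt engineering as an API wrapper: the application
# prepends an engineered template so every raw user prompt reaches the
# model with task context attached.

SYSTEM_TEMPLATE = (
    "You are a support assistant for Nashville Software School. "
    "Answer concisely and only about NSS programs.\n\n"
    "User question: {question}"
)

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real hosted-LLM API call.
    return f"[model response to: {prompt[:40]}...]"

def ask(question: str) -> str:
    """Wrap the raw user prompt in the engineered template before sending."""
    prompt = SYSTEM_TEMPLATE.format(question=question)
    return call_llm(prompt)

print(ask("When does the next data cohort start?"))
```

The point is that the end user never sees the template; the application owns the prompt, which is what lets it steer the model toward more effective responses.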
They also discuss more advanced techniques like retrieval-augmented generation (RAG), which involves retrieving external data and supplying it to the LLM to tailor its responses further. This approach helps mitigate challenges like hallucination and helps ensure contextually relevant responses. Additionally, they touch on fine-tuning LLMs for specific applications, which requires more computational resources and domain expertise.
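The RAG pattern described above can be sketched as retrieve-then-prompt. This toy version uses a tiny in-memory corpus and naive keyword overlap as the retriever (real systems typically use vector embeddings); `call_llm` and the corpus text are invented for the example.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant document, then inject it as context into the model prompt.

CORPUS = [
    "NSS offers a part-time data science bootcamp.",
    "RAG grounds model answers in retrieved documents.",
    "Fine-tuning updates model weights on domain data.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank corpus documents by keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real hosted-LLM API call.
    return f"[answer grounded in: {prompt}]"

def answer(query: str) -> str:
    """Build a prompt that pairs retrieved context with the user question."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("what is RAG"))
```

Because the model only sees retrieved source text alongside the question, its answer can be grounded in that text, which is how RAG mitigates hallucination.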
John and Michael highlight the importance of having machine learning skills to implement these techniques effectively. While fine-tuning LLMs may require specialized skills and resources, the emergence of smaller LLMs makes certain applications more accessible. They also mention the potential of multi-agent models for deeper and more focused outputs, indicating an exciting direction for LLM applications.
Listen to this episode of Stories from the Hackery by Nashville Software School for more on the evolving landscape of LLMs and why organizations need to stay informed about these advancements to harness their full potential.
START YOUR NSS JOURNEY
To learn more about Nashville Software School and our upcoming programs, visit our website at https://NashvilleSoftwareSchool.com
SUPPORT NSS
Want to support NSS in our mission to teach adults skills needed for careers in tech? Visit our website to donate to the scholarship fund and learn about other volunteer opportunities! Nashss.com/Give
CHAPTERS:
00:00 - Introduction
01:57 - Public Chat Bot Usage
02:47 - Prompt Engineering
03:21 - Retrieval Augmented Generation (RAG)
03:57 - Fine Tuning of Models
04:37 - Technical Implementation
05:10 - Product Engineering and Its Role
08:17 - Implementing Prompt and Product Engineering
10:15 - Utilizing External Context with RAG
11:20 - Responsible AI Considerations
16:57 - Overcoming Challenges and Limitations
23:53 - Future Trends and Considerations
29:48 - Prompt and Product Engineering Techniques